Modal logic


Logical characterizations of recurrent graph neural networks with reals and floats

Neural Information Processing Systems

In pioneering work from 2019, Barceló and coauthors identified logics that precisely match the expressive power of constant iteration-depth graph neural networks (GNNs) relative to properties definable in first-order logic. In this article, we give exact logical characterizations of recurrent GNNs in two scenarios: (1) in the setting with floating-point numbers and (2) with reals. For floats, the formalism matching recurrent GNNs is a rule-based modal logic with counting, while for reals we use a suitable infinitary modal logic, also with counting. These results give exact matches between logics and GNNs in the recurrent setting without relativising to a background logic in either case, but using some natural assumptions about floating-point arithmetic. Applying our characterizations, we also prove that, relative to graph properties definable in monadic second-order logic (MSO), our infinitary and rule-based logics are equally expressive. This implies that recurrent GNNs with reals and floats have the same expressive power over MSO-definable properties and shows that, for such properties, also recurrent GNNs with reals are characterized by a (finitary!) rule-based modal logic.


Modal Logical Neural Networks

Sulc, Antonin

arXiv.org Artificial Intelligence

We propose Modal Logical Neural Networks (MLNNs), a neurosymbolic framework that integrates deep learning with the formal semantics of modal logic, enabling reasoning about necessity and possibility. Drawing on Kripke semantics, we introduce specialized neurons for the modal operators $\Box$ and $\Diamond$ that operate over a set of possible worlds, enabling the framework to act as a differentiable ``logical guardrail.'' The architecture is highly flexible: the accessibility relation between worlds can either be fixed by the user to enforce known rules or, as an inductive feature, be parameterized by a neural network. This allows the model to optionally learn the relational structure of a logical system from data while simultaneously performing deductive reasoning within that structure. The entire framework is differentiable from end to end, with learning driven by minimizing a logical contradiction loss. This not only makes the system resilient to inconsistent knowledge but also enables it to learn nonlinear relationships that can help define the logic of a problem space. We illustrate MLNNs on four case studies: grammatical guardrailing, axiomatic detection of the unknown, multi-agent epistemic trust, and detecting constructive deception in natural language negotiation. These experiments demonstrate how enforcing or learning accessibility can increase logical consistency and interpretability without changing the underlying task architecture.
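To make the idea of modal neurons concrete, here is a minimal NumPy sketch of soft $\Box$ and $\Diamond$ operators over a Kripke frame. This is our own illustration, not the paper's implementation: the function names, the min/max soft semantics, and the encoding of accessibility as a matrix in [0,1] are all assumptions chosen for clarity.

```python
import numpy as np

# truth[v] in [0, 1]: soft truth value of a formula phi at world v.
# access[w, v] in [0, 1]: soft accessibility from world w to world v.

def box_neuron(truth, access):
    """Box(phi) at w: phi holds at every world accessible from w.
    Each world v contributes max(truth[v], 1 - access[w, v]), so a
    fully inaccessible world (access 0) is vacuously true."""
    contrib = np.maximum(truth[None, :], 1.0 - access)
    return contrib.min(axis=1)

def diamond_neuron(truth, access):
    """Diamond(phi) at w: phi holds at some accessible world.
    Each world v contributes min(truth[v], access[w, v]), so an
    inaccessible world contributes 0."""
    contrib = np.minimum(truth[None, :], access)
    return contrib.max(axis=1)

# Three worlds, crisp accessibility: 0 -> 1, 0 -> 2, 1 -> 2.
truth = np.array([0.0, 1.0, 1.0])           # phi is false only at world 0
access = np.array([[0, 1, 1],
                   [0, 0, 1],
                   [0, 0, 0]], dtype=float)
print(box_neuron(truth, access))     # world 2 has no successors: vacuously 1
print(diamond_neuron(truth, access)) # world 2 reaches nothing: 0
```

With crisp (0/1) accessibility these operators reduce to the classical Kripke semantics; in a learned setting one would replace min/max with smooth relaxations so gradients flow through both the truth values and the accessibility matrix.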



The Correspondence Between Bounded Graph Neural Networks and Fragments of First-Order Logic

Grau, Bernardo Cuenca, Feng, Eva, Wałęga, Przemysław A.

arXiv.org Artificial Intelligence

Graph Neural Networks (GNNs) address two key challenges in applying deep learning to graph-structured data: they handle varying size input graphs and ensure invariance under graph isomorphism. While GNNs have demonstrated broad applicability, understanding their expressive power remains an important question. In this paper, we propose GNN architectures that correspond precisely to prominent fragments of first-order logic (FO), including various modal logics as well as more expressive two-variable fragments. To establish these results, we apply methods from finite model theory of first-order and modal logics to the domain of graph representation learning. Our results provide a unifying framework for understanding the logical expressiveness of GNNs within FO.



A Modal Logic for Temporal and Jurisdictional Classifier Models

Di Florio, Cecilia, Dong, Huimin, Rotolo, Antonino

arXiv.org Artificial Intelligence

Logic-based models can be used to build verification tools for machine learning classifiers employed in the legal field. ML classifiers predict the outcomes of new cases based on previous ones, thereby performing a form of case-based reasoning (CBR). In this paper, we introduce a modal logic of classifiers designed to formally capture legal CBR. We incorporate principles for resolving conflicts between precedents, by introducing into the logic the temporal dimension of cases and the hierarchy of courts within the legal system.


Lecture Notes on Verifying Graph Neural Networks

Schwarzentruber, François

arXiv.org Artificial Intelligence

In these lecture notes, we first recall the connection between graph neural networks, Weisfeiler-Lehman tests and logics such as first-order logic and graded modal logic. We then present a modal logic in which counting modalities appear in linear inequalities in order to solve verification tasks on graph neural networks. We describe an algorithm for the satisfiability problem of that logic. It is inspired by the tableau method for vanilla modal logic, extended with reasoning in the quantifier-free fragment of Boolean algebra with Presburger arithmetic.
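To illustrate what "counting modalities in linear inequalities" look like semantically, here is a toy Python checker for a single atom of the rough form $\sum_i w_i \cdot \#\{v \text{ successor} : v \models p_i\} \ge b$ at a node. The function name, the adjacency-dict encoding, and the restriction to one constraint are our own simplifications, not the notes' actual syntax or algorithm.

```python
def satisfies_counting(graph, labels, weights, bias, node):
    """Evaluate one counting constraint at `node`:
         sum over propositions p of weights[p] * (number of successors
         of `node` labelled p)  >=  bias
    graph:  dict node -> list of successor nodes
    labels: dict node -> proposition label
    """
    counts = {}
    for v in graph[node]:
        counts[labels[v]] = counts.get(labels[v], 0) + 1
    return sum(w * counts.get(p, 0) for p, w in weights.items()) >= bias

# Node 0 has two p-successors and one q-successor.
graph = {0: [1, 2, 3], 1: [], 2: [], 3: []}
labels = {0: 'q', 1: 'p', 2: 'p', 3: 'q'}
print(satisfies_counting(graph, labels, {'p': 1, 'q': -2}, 0, 0))  # 2 - 2 >= 0
```

A satisfiability procedure for the full logic must reason about such constraints symbolically (hence the Presburger-arithmetic component), rather than evaluating them on a given graph as above.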




Graph Learning via Logic-Based Weisfeiler-Leman Variants and Tabularization

Jaakkola, Reijo, Janhunen, Tomi, Kuusisto, Antti, Ortiz, Magdalena, Selin, Matias, Šimkus, Mantas

arXiv.org Artificial Intelligence

Graph Neural Networks (GNNs) are a powerful solution in many settings where the data is naturally graph-structured; they enable us to learn not only from individual features of objects, but also from the connections between them [15, 20]. They make the graph topology a key part of the learning process, enabling more accurate results in a range of scenarios [19, 21]. However, graph learning is computationally expensive, and training GNNs is usually much slower than training state-of-the-art methods on tabular data [11, 5]. In this paper, we aim to achieve graph learning with simpler, more efficient methods for tabular data, after enriching graph nodes with information about the graph topology. Our stepping stone is the Weisfeiler-Leman algorithm (WL), a well-known technique for approximating graph isomorphism [18]. In addition to its many applications in graph theory and other areas of computer science, WL has been very influential in graph learning. It has been used in graph kernels, enabling algorithms such as support vector machines to operate directly on graphs [16, 11]. Crucially, it has been used to characterize the expressive power of several types of GNNs [8, 20, 1], suggesting WL as a suitable tool to distill the topological information that GNNs can learn from.
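The WL stepping stone mentioned above can be sketched in a few lines: 1-dimensional color refinement repeatedly rehashes each node's color together with the multiset of its neighbours' colors, and the resulting stable colors are exactly the kind of per-node topological information one could append to a tabular feature set. This is a generic textbook sketch, not the paper's variant; the function name and adjacency-dict encoding are our own.

```python
def wl_colors(adj, rounds=10):
    """1-dimensional Weisfeiler-Leman color refinement.
    adj: dict node -> list of neighbours (undirected graph as two
    directed entries per edge). Returns a dict node -> color id;
    two nodes get the same id iff WL cannot distinguish them within
    the given number of rounds."""
    colors = {v: 0 for v in adj}  # start from a uniform coloring
    for _ in range(rounds):
        # Signature = own color plus sorted multiset of neighbour colors.
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # Compress distinct signatures to small integer color ids.
        palette = {s: i for i, s in enumerate(sorted(set(signatures.values())))}
        new_colors = {v: palette[signatures[v]] for v in adj}
        if new_colors == colors:  # refinement has stabilized
            break
        colors = new_colors
    return colors

# Path graph 0 - 1 - 2: the endpoints are WL-equivalent, the middle is not.
print(wl_colors({0: [1], 1: [0, 2], 2: [1]}))
```

Each node's final color (or its whole sequence of colors across rounds) can then be one-hot encoded and handed to any fast tabular learner.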